Shadoks Approach to Convex Covering
We describe the heuristics used by the Shadoks team in the CG:SHOP 2023
Challenge. The Challenge consists of 206 instances, each being a polygon with
holes. The goal is to cover each instance polygon with a small number of convex
polygons. Our general strategy is the following. We find a big collection of
large (often maximal) convex polygons inside the instance polygon and then
solve several set cover problems to find a small subset of the collection that
covers the whole polygon. Comment: SoCG CG:SHOP 2023 Challenge
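The set-cover step of this strategy can be sketched with the standard greedy heuristic. This is a minimal illustration with toy data, not the Shadoks team's code: the polygon is abstracted as a finite set of witness cells, and each candidate convex piece as the subset of cells it contains.

```python
# Hedged sketch of the set-cover step: greedy cover of witness cells by
# candidate convex pieces. All names and data here are illustrative.

def greedy_set_cover(universe, candidates):
    """Pick candidate sets until every element of `universe` is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered cells.
        best = max(candidates, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe is not coverable by the candidates")
        chosen.append(best)
        uncovered -= best
    return chosen

# Toy instance: 6 cells, 4 candidate "convex pieces".
cells = range(6)
pieces = [frozenset({0, 1, 2}), frozenset({2, 3}),
          frozenset({3, 4, 5}), frozenset({1, 4})]
cover = greedy_set_cover(cells, pieces)
print(len(cover))  # 2
```

In the Challenge setting, an exact or near-exact set-cover solver can replace the greedy choice, which is why the strategy solves "several set cover problems" over one fixed collection of pieces.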
On the Combinatorial Complexity of Approximating Polytopes
Approximating convex bodies succinctly by convex polytopes is a fundamental
problem in discrete geometry. A convex body K of diameter diam(K) is given in
Euclidean d-dimensional space, where d is a constant. Given an error parameter
epsilon > 0, the objective is to determine a polytope of minimum combinatorial
complexity whose Hausdorff distance from K is at most epsilon * diam(K). By
combinatorial complexity we mean the total number of faces of all dimensions of
the polytope. A well-known result by Dudley implies that O(1/epsilon^{(d-1)/2})
facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the
number of vertices, but neither result bounds the total combinatorial
complexity. We show that there exists an approximating polytope whose total
combinatorial complexity is O-tilde(1/epsilon^{(d-1)/2}), where O-tilde
conceals a polylogarithmic factor in 1/epsilon. This is a significant
improvement upon the best known bound, which is roughly O(1/epsilon^{d-2}).
Our result is based on a novel combination of both old and new ideas. First,
we employ Macbeath regions, a classical structure from the theory of convexity.
The construction of our approximating polytope employs a new stratified
placement of these regions. Second, in order to analyze the combinatorial
complexity of the approximating polytope, we present a tight analysis of a
width-based variant of B\'{a}r\'{a}ny and Larman's economical cap covering.
Finally, we use a deterministic adaptation of the witness-collector technique
(developed recently by Devillers et al.) in the context of our stratified
construction. Comment: In Proceedings of the 32nd International Symposium on
Computational Geometry (SoCG 2016) and accepted to the SoCG 2016 special issue
of Discrete and Computational Geometry
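The Dudley-type facet bound can be made concrete in the plane (d = 2, so O(1/epsilon^{1/2}) facets). The sketch below is only an illustration of that exponent for the unit disk, not the paper's construction: an inscribed regular k-gon already achieves Hausdorff error epsilon with k = O(1/sqrt(epsilon)) facets.

```python
import math

# Toy d = 2 instance of the Dudley-type facet bound: for the unit disk,
# an inscribed regular k-gon has Hausdorff error 1 - cos(pi/k), which is
# at most (pi/k)^2 / 2, so k = ceil(pi / sqrt(2*eps)) facets suffice,
# i.e. O(1/sqrt(eps)) facets.

def facets_for_disk(eps):
    k = math.ceil(math.pi / math.sqrt(2 * eps))
    error = 1 - math.cos(math.pi / k)  # exact Hausdorff error of the k-gon
    assert error <= eps
    return k

for eps in (1e-1, 1e-2, 1e-4):
    print(eps, facets_for_disk(eps))
```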
Efficient Algorithms for Battleship
We consider an algorithmic problem inspired by the Battleship game. In the
variant of the problem that we investigate, there is a unique ship of shape S,
a subset of Z^2, which has been translated in the lattice Z^2. We assume that a
player has already hit the ship with a first shot and the goal is to sink the
ship using as few shots as possible, that is, by minimizing the number of
missed shots. While the player knows the shape S, which position of S has
been hit is not known.
Given a shape S of n lattice points, the minimum number of misses that
can be achieved in the worst case by any algorithm is called the Battleship
complexity of the shape S and denoted c(S). We prove three bounds on
c(S), each considering a different class of shapes. First, we have
c(S) <= n - 1 for arbitrary shapes and the bound is tight for
parallelogram-free shapes. Second, we provide an algorithm that shows that
c(S) = O(log n) if S is an HV-convex polyomino. Third, we provide an algorithm
that shows that c(S) = O(log log n) if S is a digital convex set. This last
result is obtained through a novel discrete version of the Blaschke-Lebesgue
inequality relating the area and the width of any convex body. Comment:
Conference version at 10th International Conference on Fun with Algorithms
(FUN 2020)
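The n - 1 bound for arbitrary shapes can be checked on small shapes by simulating the natural candidate-elimination strategy. This sketch illustrates only that bound, not the paper's logarithmic algorithms: the first hit is normalized to the origin, so the candidate placements are the translates of S containing the origin, and shooting the cells of one consistent candidate either sinks the ship or produces a miss that eliminates that candidate.

```python
# Candidate-elimination strategy behind c(S) <= n - 1 (illustrative sketch,
# not the paper's algorithms). Each miss eliminates at least one of the n
# candidate placements, and the true placement is never eliminated.

def misses(shape, true_cell):
    """Misses until the ship is sunk when the first hit landed on true_cell."""
    ship = {(x - true_cell[0], y - true_cell[1]) for (x, y) in shape}
    candidates = [{(x - cx, y - cy) for (x, y) in shape} for (cx, cy) in shape]
    shots, missed = {(0, 0)}, 0
    while not ship <= shots:
        # Any candidate consistent with the hits/misses observed so far.
        cand = next(c for c in candidates
                    if all((s in c) == (s in ship) for s in shots))
        for cell in sorted(cand - shots):
            shots.add(cell)
            if cell not in ship:
                missed += 1
                break  # candidate eliminated; pick a new consistent one
    return missed

tromino = {(0, 0), (1, 0), (0, 1)}  # an L-tromino, n = 3 cells
worst = max(misses(tromino, c) for c in tromino)
print(worst)  # never exceeds n - 1 = 2
```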
The Cost of Perfection for Matchings in Graphs
Perfect matchings and maximum weight matchings are two fundamental
combinatorial structures. We consider the ratio between the maximum weight of a
perfect matching and the maximum weight of a general matching. Motivated by the
computer graphics application in triangle meshes, where we seek to convert a
triangulation into a quadrangulation by merging pairs of adjacent triangles, we
focus mainly on bridgeless cubic graphs. First, we characterize graphs that
attain the extreme ratios. Second, we present a lower bound for all bridgeless
cubic graphs. Third, we present upper bounds for subclasses of bridgeless cubic
graphs, most of which are shown to be tight. Additionally, we present tight
bounds for the class of regular bipartite graphs.
On the ratio between maximum weight perfect matchings and maximum weight matchings in grids
Given a graph G that admits a perfect matching, we investigate the parameter η(G) (originally motivated by computer graphics applications) which is defined as follows. Among all nonnegative edge weight assignments, η(G) is the minimum ratio between (i) the maximum weight of a perfect matching and (ii) the maximum weight of a general matching. In this paper, we determine the exact value of η for all rectangular grids, all bipartite cylindrical grids, and all bipartite toroidal grids. We introduce several new techniques to this endeavor.
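For intuition, the two quantities in the ratio can be computed by brute force on a tiny graph. The sketch below is illustrative only (η itself further minimizes this ratio over all nonnegative weightings): on the 1 x 4 grid, i.e. a path, one weighting already forces the ratio below 1.

```python
from itertools import combinations

# Brute-force comparison of maximum-weight matching vs maximum-weight
# *perfect* matching on a tiny graph. Illustrative sketch only.

def matchings(edges):
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            ends = [x for (u, v, _) in sub for x in (u, v)]
            if len(ends) == len(set(ends)):  # pairwise-disjoint endpoints
                yield sub

def best_weight(edges, n, perfect=False):
    weights = [sum(w for (_, _, w) in m) for m in matchings(edges)
               if not perfect or 2 * len(m) == n]
    return max(weights)

# The 1 x 4 grid (a path): its unique perfect matching must use the two
# light end edges, while a general matching can take the heavy middle edge.
edges = [("a", "b", 1), ("b", "c", 10), ("c", "d", 1)]
pm = best_weight(edges, 4, perfect=True)
mm = best_weight(edges, 4)
print(pm, mm, pm / mm)  # 2 10 0.2
```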
Short Flip Sequences to Untangle Segments in the Plane
A (multi)set of segments in the plane may form a TSP tour, a matching, a
tree, or any multigraph. If two segments cross, then we can reduce the total
length with the following flip operation. We remove a pair of crossing
segments, and insert a pair of non-crossing segments, while keeping the same
vertex degrees. The goal of this paper is to devise efficient strategies to
flip the segments in order to obtain crossing-free segments after a small
number of flips. Linear and near-linear bounds on the number of flips were only
known for segments with endpoints in convex position. We generalize these
results, proving linear and near-linear bounds for cases with endpoints that
are not in convex position. Our results are proved in a general setting that
applies to multiple problems, using multigraphs and the distinction between
removal and insertion choices when performing a flip. Comment: 19 pages, 10 figures
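A hedged sketch of a single flip in the matching case (the paper works in a more general multigraph setting): when two segments properly cross, their four endpoints are in convex position, both reconnections are non-crossing sides of that quadrilateral, and each is shorter than the crossing pair by the triangle inequality applied at the crossing point.

```python
import math

# One flip on a crossing pair of matching segments. If ab and cd properly
# cross, both reconnections (ac, bd) and (ad, bc) are opposite sides of
# the convex quadrilateral on the four endpoints, hence non-crossing, and
# each is shorter than ab + cd. We keep the shorter one.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cross_sign(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_cross(a, b, c, d):
    return (cross_sign(a, b, c) * cross_sign(a, b, d) < 0 and
            cross_sign(c, d, a) * cross_sign(c, d, b) < 0)

def flip(a, b, c, d):
    """Replace crossing segments ab, cd by the shorter reconnection."""
    assert properly_cross(a, b, c, d)
    options = [((a, c), (b, d)), ((a, d), (b, c))]
    return min(options, key=lambda o: dist(*o[0]) + dist(*o[1]))

a, b = (0.0, 0.0), (2.0, 2.0)   # crossing diagonals of a square
c, d = (0.0, 2.0), (2.0, 0.0)
s1, s2 = flip(a, b, c, d)
old, new = dist(a, b) + dist(c, d), dist(*s1) + dist(*s2)
print(new < old, properly_cross(*s1, *s2))  # True False
```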
Approximate Convex Intersection Detection with Applications to Width and Minkowski Sums
Approximation problems involving a single convex body in R^d have received a great deal of attention in the computational geometry community. In contrast, works involving multiple convex bodies are generally limited to dimensions d <= 3 and/or do not consider approximation. Given an approximation parameter epsilon > 0, we show how to independently preprocess two polytopes A, B subset R^d into data structures of size O(1/epsilon^{(d-1)/2}) such that we can answer in polylogarithmic time whether A and B intersect approximately. More generally, we can answer this for the images of A and B under affine transformations. Next, we show how to epsilon-approximate the Minkowski sum of two given polytopes defined as the intersection of n halfspaces in O(n log(1/epsilon) + 1/epsilon^{(d-1)/2 + alpha}) time, for any constant alpha > 0. Finally, we present a surprising impact of these results on a well-studied problem that considers a single convex body. We show how to epsilon-approximate the width of a set of n points in O(n log(1/epsilon) + 1/epsilon^{(d-1)/2 + alpha}) time, for any constant alpha > 0, a major improvement over the previous bound of roughly O(n + 1/epsilon^{d-1}) time.
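For context on the width problem, here is an exact planar (d = 2) computation, entirely separate from the paper's approximate high-dimensional data structures: the width of a point set is the minimum, over convex-hull edges, of the farthest hull vertex's distance to the edge's supporting line.

```python
# Exact width of a planar point set via its convex hull. O(h * n) sketch.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain; returns the hull counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(order):
        h = []
        for p in order:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return chain(pts) + chain(pts[::-1])

def width(points):
    h = hull(points)
    best = float("inf")
    for i in range(len(h)):
        a, b = h[i], h[(i + 1) % len(h)]
        edge_len = ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
        farthest = max(abs(cross(a, b, p)) for p in h) / edge_len
        best = min(best, farthest)
    return best

# A 4 x 1 rectangle plus an interior point: width 1.
print(width([(0, 0), (4, 0), (4, 1), (0, 1), (2, 0.5)]))  # 1.0
```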
Optimal Area-Sensitive Bounds for Polytope Approximation
Approximating convex bodies is a fundamental question in geometry and has a
wide variety of applications. Given a convex body K of diameter Delta in R^d
for fixed d, the objective is to minimize the number of vertices
(alternatively, the number of facets) of an approximating polytope for a given
Hausdorff error epsilon. The best known uniform bound, due to Dudley (1974),
shows that O((Delta/epsilon)^{(d-1)/2}) facets suffice. While this bound is
optimal in the case of a Euclidean ball, it is far from optimal for ``skinny''
convex bodies.
A natural way to characterize a convex object's skinniness is in terms of its
relationship to the Euclidean ball. Given a convex body K, define its surface
diameter Delta_{d-1}(K) to be the diameter of a Euclidean ball of the same
surface area as K. It follows from generalizations of the isoperimetric
inequality that Delta_{d-1} <= Delta.
We show that, under the assumption that the width of the body in any direction
is at least epsilon, it is possible to approximate a convex body using
O((Delta_{d-1}/epsilon)^{(d-1)/2}) facets. This bound is never worse than the
previous bound and may be significantly better for skinny bodies. The bound is
tight, in the sense that for any value of Delta_{d-1}, there exist convex
bodies that, up to constant factors, require this many facets.
The improvement arises from a novel approach to sampling points on the
boundary of a convex body. We employ a classical concept from convexity,
called Macbeath regions. We demonstrate that Macbeath regions in K and K's
polar behave much like polar pairs. We then apply known results on the Mahler
volume to bound their number.
Approximate Nearest Neighbor Searching with Non-Euclidean and Weighted Distances
We present a new approach to approximate nearest-neighbor queries in fixed
dimension under a variety of non-Euclidean distances. We are given a set S of
n points in R^d, an approximation parameter epsilon > 0, and a distance
function that satisfies certain smoothness and growth-rate assumptions. The
objective is to preprocess S into a data structure so that for any query point
q in R^d, it is possible to efficiently report any point of S whose distance
from q is within a factor of 1 + epsilon of the actual closest point.
Prior to this work, the most efficient data structures for approximate
nearest-neighbor searching in spaces of constant dimensionality applied only
to the Euclidean metric. This paper overcomes this limitation through a method
called convexification. For admissible distance functions, the proposed data
structures answer queries in logarithmic time using
O(n log(1/epsilon) / epsilon^{d/2}) space, nearly matching the best known
bounds for the Euclidean metric. These results apply to both convex scaling
distance functions (including the Mahalanobis distance and weighted Minkowski
metrics) and Bregman divergences (including the Kullback-Leibler divergence
and the Itakura-Saito distance).
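One elementary way to see why Mahalanobis-type distances are amenable to Euclidean machinery: a fixed quadratic distance reduces to the Euclidean one by a linear change of coordinates. This sketch only illustrates that fixed-metric special case; the paper's convexification method is far more general (weighted and query-dependent distances).

```python
import math

# Sketch: for a *fixed* quadratic (Mahalanobis-type) distance, nearest
# neighbors reduce to the Euclidean case by a linear map. With M = L L^T,
# sqrt((p-q)^T M^{-1} (p-q)) = |L^{-1} p - L^{-1} q|. Here M = diag(4, 1),
# so L^{-1} scales the x-coordinate by 1/2. Illustrative data only.

def mahalanobis(p, q):
    dx, dy = p[0] - q[0], p[1] - q[1]
    return math.sqrt(dx * dx / 4 + dy * dy)

def transform(p):
    return (p[0] / 2, p[1])

points = [(0.0, 0.0), (3.0, 0.1), (0.5, 2.0)]
query = (2.0, 0.0)

nn_direct = min(points, key=lambda p: mahalanobis(p, query))
nn_mapped = min(points, key=lambda p: math.dist(transform(p), transform(query)))
print(nn_direct == nn_mapped, nn_direct)  # True (3.0, 0.1)
```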
Complexity dichotomy on partial grid recognition
Deciding whether a graph can be embedded in a grid using only unit-length
edges is NP-complete, even when restricted to binary trees. However, it is not
difficult to devise a number of graph classes for which the problem is
polynomial, even trivial. A natural step, outstanding thus far, was to provide
a broad classification of graphs that make for polynomial or NP-complete
instances. We provide such a classification based on the set of allowed vertex
degrees in the input graphs, yielding a full dichotomy on the complexity of the
problem. As byproducts, the previous NP-completeness result for binary trees
was strengthened to strictly binary trees, and the three-dimensional version of
the problem was for the first time proven to be NP-complete. Our results were
made possible by introducing the concepts of consistent orientations and robust
gadgets, and by showing how the former allows NP-completeness proofs by local
replacement even in the absence of the latter